
Uncertainty Principle




A Central Limit Theorem for Differentially Private Query Answering

Neural Information Processing Systems

The central question is, therefore, to understand which noise distribution optimizes the privacy-accuracy trade-off, especially when the dimension of the answer vector is high.
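The trade-off this abstract alludes to can be made concrete with a back-of-the-envelope comparison (a hedged sketch, not the paper's analysis). Assume d counting queries of sensitivity 1 each, so that one individual can shift the answer vector by up to d in the l1 norm and sqrt(d) in the l2 norm; the numeric choices below (d = 1000, epsilon = 1, delta = 1e-6) are purely illustrative.

```python
import numpy as np

def laplace_var_per_coord(d, epsilon):
    # Pure epsilon-DP via Laplace noise: calibrate to l1-sensitivity d
    # (assumed setup above), so scale = d / epsilon per coordinate and
    # per-coordinate variance = 2 * scale^2, which grows like d^2.
    scale = d / epsilon
    return 2 * scale**2

def gaussian_var_per_coord(d, epsilon, delta):
    # (epsilon, delta)-DP via Gaussian noise: calibrate to l2-sensitivity
    # sqrt(d); the classical analysis uses
    # sigma = sqrt(2 ln(1.25/delta)) * sqrt(d) / epsilon,
    # so per-coordinate variance grows only linearly in d.
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * np.sqrt(d) / epsilon
    return sigma**2

# At d = 1000 the Gaussian mechanism's per-coordinate variance is orders
# of magnitude smaller than Laplace's, illustrating why the choice of
# noise distribution matters most in high dimension.
print(laplace_var_per_coord(1000, 1.0))         # ~2e6
print(gaussian_var_per_coord(1000, 1.0, 1e-6))  # ~2.8e4
```

This only contrasts two textbook mechanisms under an assumed workload; the paper's contribution concerns which distribution is optimal, not this simple comparison.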


Quantum neural network may be able to cheat the uncertainty principle

New Scientist

The Heisenberg uncertainty principle puts a limit on how precisely we can measure certain properties of quantum objects. But researchers may have found a way to bypass this limitation using a quantum version of a neural network. Given, for example, a chemically useful molecule, how can you predict what properties it might have in an hour or tomorrow? To make such predictions, researchers start by measuring its current properties. But for quantum objects, including some molecules, this can be unexpectedly difficult because each measurement can interfere with or change the outcome of the next measurement.


An Uncertainty Principle is a Price of Privacy-Preserving Microdata

Neural Information Processing Systems

Privacy-protected microdata are often the desired output of a differentially private algorithm since microdata is familiar and convenient for downstream users. However, there is a statistical price for this kind of convenience. We show that an uncertainty principle governs the trade-off between accuracy for a population of interest ("sum query") vs. accuracy for its component sub-populations ("point queries"). Compared to differentially private query answering systems that are not required to produce microdata, accuracy can degrade by a logarithmic factor. For example, in the case of pure differential privacy, without the microdata requirement, one can provide noisy answers to the sum query and all point queries while guaranteeing that each answer has squared error $O(1/\epsilon^2)$. With the microdata requirement, one must choose between allowing an additional $\log^2(d)$ factor ($d$ is the number of point queries) for some point queries or allowing an extra $O(d^2)$ factor for the sum query. We present lower bounds for pure, approximate, and concentrated differential privacy. We propose mitigation strategies and create a collection of benchmark datasets that can be used for public study of this problem.
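The "no microdata" baseline in the abstract can be illustrated with a minimal sketch (not the paper's construction): under pure epsilon-DP, one can answer the sum query and each point query directly with Laplace noise, splitting the privacy budget, so every answer has squared error $O(1/\epsilon^2)$. The counts and the even budget split below are hypothetical.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Answer one numeric query under pure epsilon-DP by adding Laplace
    noise; the noise variance is 2 * (sensitivity / epsilon) ** 2."""
    return value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
counts = np.array([120, 45, 300, 80])  # hypothetical sub-population counts
epsilon = 1.0

# One individual changes one point query by 1 and the sum query by 1, so
# each query family has sensitivity 1. Splitting the budget evenly, every
# answer has squared error O(1/epsilon^2) -- no log^2(d) or d^2 blow-up,
# because no single synthetic dataset must encode all answers at once.
eps_each = epsilon / 2
point_answers = [laplace_mechanism(c, 1.0, eps_each, rng) for c in counts]
sum_answer = laplace_mechanism(counts.sum(), 1.0, eps_each, rng)
```

The paper's lower bounds show that once the output must itself be microdata, this direct-answering strategy is no longer available and the extra factors become unavoidable.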



Uncertainty in Authorship: Why Perfect AI Detection Is Mathematically Impossible

Ganie, Aadil Gani

arXiv.org Artificial Intelligence

As large language models (LLMs) become more advanced, it is increasingly difficult to distinguish between human-written and AI-generated text. This paper draws a conceptual parallel between quantum uncertainty and the limits of authorship detection in natural language. We argue that there is a fundamental trade-off: the more confidently one tries to identify whether a text was written by a human or an AI, the more one risks disrupting the text's natural flow and authenticity. This mirrors the tension between precision and disturbance found in quantum systems. We explore how current detection methods--such as stylometry, watermarking, and neural classifiers--face inherent limitations. Enhancing detection accuracy often leads to changes in the AI's output, making other features less reliable. In effect, the very act of trying to detect AI authorship introduces uncertainty elsewhere in the text. Our analysis shows that when AI-generated text closely mimics human writing, perfect detection becomes not just technologically difficult but theoretically impossible. We address counterarguments and discuss the broader implications for authorship, ethics, and policy. Ultimately, we suggest that the challenge of AI-text detection is not just a matter of better tools--it reflects a deeper, unavoidable tension in the nature of language itself.